
    Robust Data-driven Prescriptiveness Optimization

    The abundance of data has led to the emergence of a variety of optimization techniques that attempt to leverage available side information to provide more anticipative decisions. The wide range of methods and contexts of application has motivated the design of a universal unitless measure of performance known as the coefficient of prescriptiveness. This coefficient was designed to quantify both the quality of contextual decisions compared to a reference one and the prescriptive power of side information. To identify policies that maximize the former in a data-driven context, this paper introduces a distributionally robust contextual optimization model where the coefficient of prescriptiveness substitutes for the classical empirical risk minimization objective. We present a bisection algorithm to solve this model, which relies on solving a series of linear programs when the distributional ambiguity set has an appropriate nested form and polyhedral structure. Studying a contextual shortest path problem, we evaluate the robustness of the resulting policies against alternative methods when the out-of-sample dataset is subject to varying amounts of distribution shift.
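    The bisection idea can be sketched generically: because the objective is a ratio, one can bisect on its achievable level, and each feasibility check reduces to a tractable subproblem (a linear program under the paper's assumptions). The stand-in below uses a toy fractional objective and a grid search in place of the LP; all names and the instance are illustrative assumptions, not the paper's model.

    ```python
    def inner_max(lam, grid):
        """sup_x N(x) - lam * D(x) over a finite grid (stands in for solving an LP)."""
        return max(n - lam * d for n, d in grid)

    def bisect_ratio(grid, lo=0.0, hi=1.0, tol=1e-9):
        """Find max_x N(x)/D(x) with D > 0 by bisection on the level lam:
        level lam is achievable iff sup_x [N(x) - lam*D(x)] >= 0."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if inner_max(mid, grid) >= 0.0:
                lo = mid          # level mid is achievable; push it up
            else:
                hi = mid          # not achievable; come down
        return lo

    # Toy instance: N(x) = x, D(x) = x**2 + 1 on [0, 2]; the ratio peaks at
    # x = 1 with value 0.5, which the bisection recovers.
    xs = [i / 1000 * 2 for i in range(1001)]
    grid = [(x, x * x + 1) for x in xs]
    print(round(bisect_ratio(grid), 6))  # -> 0.5
    ```

    In the paper's setting the inner check is an LP over the nested polyhedral ambiguity set rather than a grid maximization, but the outer bisection logic is the same.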

    Lagrangian duality for robust problems with decomposable functions: the case of a robust inventory problem

    We consider a class of min-max robust problems in which the functions that need to be “robustified” can be decomposed as the sum of arbitrary functions. This class of problems includes many practical problems, such as the lot-sizing problem under demand uncertainty. By considering a Lagrangian relaxation of the uncertainty set, we derive a tractable approximation, called the dual Lagrangian approach, which we relate to both the classical dualization approximation approach and an exact approach. Moreover, we show that the dual Lagrangian approach coincides with the affine decision rule approximation approach. The dual Lagrangian approach is applied to a lot-sizing problem, in which demands are assumed to be uncertain and to belong to an uncertainty set with a budget constraint for each time period. Using the insights provided by the interpretation of the Lagrangian multipliers as penalties in the proposed approach, two heuristic strategies, namely a new guided iterated local search heuristic and a subgradient optimization method, are designed to solve more complex lot-sizing problems in which additional practical aspects, such as setup costs, are considered. Computational results show the efficiency of the proposed heuristics, which provide a good compromise between the quality of the robust solutions and the running time required in their computation.
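    The subgradient optimization mentioned above can be sketched on a generic Lagrangian dual: dualizing a coupling constraint with a multiplier makes the inner minimization decompose term by term, and the multiplier is then updated along a subgradient of the dual function. The toy knapsack instance and step-size rule below are illustrative assumptions, not the paper's lot-sizing model.

    ```python
    def dual_value_and_subgrad(lam, values, weights, W):
        """g(lam) = min over x in {0,1}^n of sum_i (lam*w_i - v_i)*x_i - lam*W.
        The inner min decomposes: pick x_i = 1 exactly when lam*w_i - v_i < 0."""
        x = [1 if lam * w - v < 0 else 0 for v, w in zip(values, weights)]
        g = sum((lam * w - v) * xi for v, w, xi in zip(values, weights, x)) - lam * W
        subgrad = sum(w * xi for w, xi in zip(weights, x)) - W  # slope of g at lam
        return g, subgrad

    # Toy instance: minimize -v.x subject to w.x <= W, x binary.
    values, weights, W = [6.0, 5.0, 4.0], [3.0, 3.0, 3.0], 6.0
    lam, best = 0.0, float("-inf")
    for k in range(1, 200):
        g, sg = dual_value_and_subgrad(lam, values, weights, W)
        best = max(best, g)              # best dual lower bound found so far
        lam = max(0.0, lam + sg / k)     # projected subgradient step, diminishing steps
    print(round(best, 4))  # -> -11.0 (matches the optimum: take items 1 and 2)
    ```

    Here the dual bound is tight because the toy instance has no duality gap; in general the subgradient method only delivers a bound plus penalty information, which is exactly what the heuristics in the paper exploit.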

    On Dynamic Programming Decompositions of Static Risk Measures in Markov Decision Processes

    Optimizing static risk-averse objectives in Markov decision processes is difficult because they do not admit standard dynamic programming equations common in Reinforcement Learning (RL) algorithms. Dynamic programming decompositions that augment the state space with discrete risk levels have recently gained popularity in the RL community. Prior work has shown that these decompositions are optimal when the risk level is discretized sufficiently. However, we show that these popular decompositions for Conditional-Value-at-Risk (CVaR) and Entropic-Value-at-Risk (EVaR) are inherently suboptimal regardless of the discretization level. In particular, we show that a saddle point property assumed to hold in prior literature may be violated. However, a decomposition does hold for Value-at-Risk, and our proof demonstrates how this risk measure differs from CVaR and EVaR. Our findings are significant because risk-averse algorithms are used in high-stakes environments, making their correctness much more critical.
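    The contrast between VaR and CVaR that the decomposition argument turns on can be illustrated on an empirical loss distribution. The definitions below follow the standard quantile form for VaR and the Rockafellar-Uryasev form for CVaR; the sample and level are illustrative assumptions.

    ```python
    import math

    def var(losses, alpha):
        """Smallest v with P(loss <= v) >= alpha: the empirical alpha-quantile."""
        s = sorted(losses)
        k = math.ceil(alpha * len(s) - 1e-9)  # small tolerance guards float noise
        return s[k - 1]

    def cvar(losses, alpha):
        """Rockafellar-Uryasev form, CVaR_a = t + E[(loss - t)+]/(1 - a),
        evaluated at t = VaR_a; with equal weights this averages the worst tail."""
        t = var(losses, alpha)
        n = len(losses)
        return t + sum(max(l - t, 0.0) for l in losses) / ((1 - alpha) * n)

    losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
    print(var(losses, 0.8))             # -> 8
    print(round(cvar(losses, 0.8), 6))  # -> 9.5 (average of the worst 20%: 9 and 10)
    ```

    VaR only reads a single quantile of the distribution, while CVaR averages the whole tail beyond it; this is the structural difference that makes a risk-level state augmentation behave differently for the two measures.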

    Robust self-scheduling of a price-maker energy storage facility in the New York electricity market

    Recent progress in energy storage raises the possibility of creating large-scale storage facilities at lower costs. This may bring economic opportunities for storage operators, especially via energy arbitrage. However, storage operation in the market could have a noticeable impact on electricity prices. This work jointly evaluates the potential operating profit of a price-maker storage facility and its impact on electricity prices in the New York state market. Based on historical data, lower and upper bounds on the supply curve of the market are constructed. These bounds are used as input for the robust self-scheduling problem of a price-maker storage facility. Our computational experiments show that the robust strategies thus obtained significantly reduce loss exposure while maintaining reasonably high expected profits.
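    As a point of reference for the self-scheduling problem, the price-taker baseline (an illustrative simplification: the paper treats a price-maker whose charging decisions move prices, bracketed by the supply-curve bounds) reduces to a small dynamic program over the state of charge when prices are fixed. Capacity, power limit, and prices below are illustrative assumptions.

    ```python
    def schedule(prices, capacity=2):
        """Max arbitrage profit with +/-1 unit per step, perfect round-trip
        efficiency, and integer charge levels, via DP over the state of charge."""
        NEG = float("-inf")
        # profit[e] = best profit so far ending at charge level e; start empty
        profit = [0.0 if e == 0 else NEG for e in range(capacity + 1)]
        for p in prices:
            nxt = [NEG] * (capacity + 1)
            for e, v in enumerate(profit):
                if v == NEG:
                    continue
                for move in (-1, 0, 1):      # discharge / idle / charge
                    e2 = e + move
                    if 0 <= e2 <= capacity:
                        # charging (move = +1) pays p; discharging earns p
                        nxt[e2] = max(nxt[e2], v - move * p)
            profit = nxt
        return max(profit)

    print(schedule([10, 50, 20, 60]))  # -> 80.0 (buy at 10 and 20, sell at 50 and 60)
    ```

    The price-maker version replaces the fixed `p` with a price that depends on the facility's own bid through the (uncertain) supply curve, which is what drives the robust formulation in the paper.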

    A Survey of Contextual Optimization Methods for Decision Making under Uncertainty

    Recently there has been a surge of interest in the operations research (OR) and machine learning (ML) communities in combining prediction algorithms and optimization techniques to solve decision-making problems in the face of uncertainty. This gave rise to the field of contextual optimization, under which data-driven procedures are developed to prescribe actions to the decision-maker that make the best use of the most recently updated information. A large variety of models and methods have been presented in both the OR and ML literature under a variety of names, including data-driven optimization, prescriptive optimization, predictive stochastic programming, policy optimization, (smart) predict/estimate-then-optimize, decision-focused learning, (task-based) end-to-end learning/forecasting/optimization, etc. Focusing on single- and two-stage stochastic programming problems, this review article identifies three main frameworks for learning policies from data and discusses their strengths and limitations. We present the existing models and methods under a uniform notation and terminology and classify them according to the three main frameworks identified. Our objective with this survey is to both strengthen the general understanding of this active field of research and stimulate further theoretical and algorithmic advancements in integrating ML and stochastic programming.

    The decision rule approach to optimization under uncertainty: methodology and applications

    Dynamic decision-making under uncertainty has a long and distinguished history in operations research. Due to the curse of dimensionality, solution schemes that naïvely partition or discretize the support of the random problem parameters are limited to small and medium-sized problems, or they require restrictive modeling assumptions (e.g., absence of recourse actions). In the last few decades, several solution techniques have been proposed that aim to alleviate the curse of dimensionality. Amongst these is the decision rule approach, which faithfully models the random process and instead approximates the feasible region of the decision problem. In this paper, we survey the major theoretical findings relating to this approach, and we investigate its potential in two application areas.
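    The core mechanism of the (affine) decision rule approach can be sketched directly: when the recourse decision is restricted to be affine in the uncertainty, both its worst-case cost and its worst constraint violation over a box uncertainty set are attained at a vertex, so robust feasibility reduces to finitely many linear conditions. The tiny demand-covering instance below is an illustrative assumption, not a model from the survey.

    ```python
    from itertools import product

    def vertices(box):
        """All vertices of a box given as a list of (lo, hi) intervals."""
        return list(product(*box))

    def worst_case(affine, box):
        """max over the box of y0 + q . xi; affine, so attained at a vertex."""
        y0, q = affine
        return max(y0 + sum(qi * xi for qi, xi in zip(q, v)) for v in vertices(box))

    def robust_feasible(affine, box):
        """Check y(xi) >= xi_1 + xi_2 (covering an uncertain total demand)
        at every vertex; both sides are affine, so vertices suffice."""
        y0, q = affine
        return all(y0 + sum(qi * xi for qi, xi in zip(q, v)) >= sum(v) - 1e-9
                   for v in vertices(box))

    box = [(0.0, 1.0), (0.0, 2.0)]    # uncertainty xi in [0,1] x [0,2]
    rule = (0.0, (1.0, 1.0))          # affine rule y(xi) = xi_1 + xi_2
    print(robust_feasible(rule, box)) # -> True
    print(worst_case(rule, box))      # -> 3.0 (worst-case production level)
    ```

    In practice the rule's coefficients are themselves decision variables optimized by a single tractable program; vertex enumeration here just certifies a given rule.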

    Worst-case regret minimization in a two-stage linear program

    In this talk, we explain how two-stage worst-case regret minimization problems can be reformulated as two-stage robust optimization models. This allows us to employ both approximate and exact solution methods that are available in the recent literature to efficiently identify good solutions for these hard problems. In particular, our numerical experiments indicate that affine decision rules are particularly effective at identifying good conservative solutions for three different types of decision problems: a multi-item newsvendor problem, a lot-sizing problem, and a production-transportation problem.
    Author affiliation: HEC Montréal
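    The worst-case regret objective itself is easy to state: the regret of a decision under a scenario is its cost minus the best cost achievable with hindsight in that scenario, and the robust solution minimizes the maximum regret over the uncertainty set. The single-stage toy newsvendor below, solved by enumeration, is an illustrative assumption, not the talk's two-stage model.

    ```python
    def cost(x, d, c=1.0, p=3.0):
        """Ordering cost plus a lost-sales penalty: c*x + p*max(d - x, 0)."""
        return c * x + p * max(d - x, 0.0)

    def regret(x, d, candidates):
        """Cost of ordering x under demand d, minus the hindsight-optimal cost."""
        return cost(x, d) - min(cost(y, d) for y in candidates)

    demands = [0, 1, 2, 3, 4]  # discrete uncertainty set (and candidate orders)
    worst_regret = {x: max(regret(x, d, demands) for d in demands) for x in demands}
    best_x = min(worst_regret, key=worst_regret.get)
    print(best_x, worst_regret[best_x])  # -> 3 3.0
    ```

    Enumeration stops scaling as soon as a second stage and a continuous uncertainty set appear, which is where the robust-optimization reformulation and affine decision rules from the talk come in.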